Tensor Ensemble Learning for Multidimensional Data
In big data applications, classical ensemble learning is typically infeasible
on the raw input data, and dimensionality reduction techniques are necessary. To
this end, a novel framework that generalises classical flat-view ensemble learning
to multidimensional tensor-valued data is introduced. This is achieved by
virtue of tensor decompositions, whereby the proposed method, referred to as
tensor ensemble learning (TEL), decomposes every input data sample into
multiple factors, allowing for flexibility in the choice of learning
algorithms and thus improved test performance. The TEL framework is
shown to naturally compress multidimensional data in order to take advantage of
the inherent multi-way data structure and exploit the benefit of ensemble
learning. The proposed framework is verified through the application of Higher
Order Singular Value Decomposition (HOSVD) to the ETH-80 dataset and is shown
to outperform the classical ensemble learning approach of bootstrap
aggregating
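As a rough illustration of the decomposition step described above, the sketch below extracts per-mode HOSVD factor matrices with NumPy; in a TEL-style pipeline, each factor (or the data projected onto it) would feed a separate base learner whose predictions are then combined. The function name, ranks, and overall structure are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def hosvd_factors(X, ranks):
    """Truncated HOSVD factor matrices of a tensor X (illustrative sketch).

    For each mode n, the factor U_n holds the leading left singular
    vectors of the mode-n unfolding of X, truncated to ranks[n] columns.
    Each U_n could serve as the input to one base learner in an ensemble.
    """
    factors = []
    for n, r in enumerate(ranks):
        # Mode-n unfolding: bring axis n to the front, then flatten the rest.
        unfolding = np.moveaxis(X, n, 0).reshape(X.shape[n], -1)
        U, _, _ = np.linalg.svd(unfolding, full_matrices=False)
        factors.append(U[:, :r])  # keep the r leading singular vectors
    return factors
```

For a sample of shape (4, 5, 6) and multilinear ranks (2, 2, 2), this yields three orthonormal factors of shapes (4, 2), (5, 2), and (6, 2), one per mode.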
The sum of tensor networks
Tensor networks (TNs) have been gaining interest as multiway data analysis
tools owing to their ability to tackle the curse of dimensionality and to
represent tensors as smaller-scale interconnections of their intrinsic
features. However, despite these obvious advantages, the current treatment of TNs
as stand-alone entities does not take full advantage of their underlying
structure and the associated feature localization. To this end, building upon
the analogy with feature fusion, we propose a rigorous framework for the
combination of TNs, focusing on summation as their natural combining
operation. This allows features from any number of tensors to be combined, as
long as their TN representation topologies are isomorphic. The benefits of the
proposed framework are demonstrated on the classification of several groups of
partially related images, where it outperforms standard machine learning
algorithms.
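To make the summation of isomorphic TNs concrete, the sketch below sums two tensors given in tensor-train (chain-topology) format: boundary cores are stacked along their single bond axis, and interior cores are embedded block-diagonally, so bond ranks add. This is the standard TT addition rule, shown here as a minimal assumption-laden example; `tt_sum` and `tt_eval` are illustrative names, not the paper's code.

```python
import numpy as np

def tt_eval(cores):
    """Contract a tensor-train (list of 3-way cores) back to a full tensor."""
    full = cores[0]  # shape (1, n_1, r_1); leading bond rank is 1
    for core in cores[1:]:
        # Contract the trailing bond of the partial result with the
        # leading bond of the next core.
        full = np.tensordot(full, core, axes=([-1], [0]))
    return full.squeeze(axis=(0, -1))  # drop the two unit boundary bonds

def tt_sum(a_cores, b_cores):
    """Sum two TT tensors with matching mode sizes and chain topology."""
    d = len(a_cores)
    out = []
    for k, (a, b) in enumerate(zip(a_cores, b_cores)):
        ra0, n, ra1 = a.shape
        rb0, _, rb1 = b.shape
        if k == 0:
            # First core: stack along the right bond axis.
            out.append(np.concatenate([a, b], axis=2))
        elif k == d - 1:
            # Last core: stack along the left bond axis.
            out.append(np.concatenate([a, b], axis=0))
        else:
            # Interior cores: block-diagonal embedding; bond ranks add.
            core = np.zeros((ra0 + rb0, n, ra1 + rb1))
            core[:ra0, :, :ra1] = a
            core[ra0:, :, ra1:] = b
            out.append(core)
    return out
```

Contracting the summed cores reproduces the elementwise sum of the two full tensors, which is what makes summation a well-defined combining operation whenever the two TN topologies are isomorphic.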